15 research outputs found

    A gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers

    Software pipelines enable organizations to chain applications that add value to content (e.g., confidentiality, reliability, and integrity) before either sharing it with partners or sending it to the cloud. However, the pipeline components add overhead when processing large volumes of data, which can become critical in real-world scenarios. This paper presents a gearbox model for processing large volumes of data with pipeline systems encapsulated into virtual containers. In this model, gears represent applications, whereas gearboxes represent software pipelines. The model was implemented as a collaborative system that automatically gears up (using parallel patterns) and/or gears down (using in-memory storage) until all gears produce uniform data-processing velocities. This reduces the delays and bottlenecks produced by the heterogeneous performance of the applications included in software pipelines. A new container tool encapsulates both the collaborative system and the software pipelines into a virtual container and deploys it on IT infrastructures. We conducted case studies to evaluate the performance of this approach when processing medical images and PDF repositories, and we also studied the incorporation of a capsule into a cloud storage service for pre-processing medical imagery. The experimental evaluation revealed the feasibility of applying the gearbox model to the deployment of software pipelines in real-world scenarios, as it can significantly improve the end-user service experience when pre-processing large-scale data in comparison with state-of-the-art solutions such as Sacbe and Parsl. This work has been partially supported by the Spanish Ministerio de Economia y Competitividad under project grant TIN2016-79637-P, "Towards Unification of HPC and Big Data paradigms".
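    The gear-up/gear-down idea can be read as a throughput-balancing loop: measure each gear's velocity, replicate slow gears with parallel workers, and buffer fast ones until all stages run at a similar speed. The Python sketch below is only a rough illustration of that idea; the stage names, the `Stage` and `balance` helpers, the throughput figures, and the scaling thresholds are all invented for the example and do not come from the paper.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One gear of the pipeline: an application with a measured velocity."""
    name: str
    items_per_sec: float  # measured throughput of a single worker (assumed known)
    workers: int = 1      # parallel replicas of this gear

    @property
    def velocity(self) -> float:
        return self.items_per_sec * self.workers

def balance(stages, tolerance=0.10, max_workers=16):
    """Gear up the slower stages (add parallel workers) until every stage's
    velocity is within `tolerance` of the fastest one; gearing down (buffering
    fast stages in memory) is left out of this sketch."""
    target = max(s.velocity for s in stages)
    for s in stages:
        while s.velocity < (1 - tolerance) * target and s.workers < max_workers:
            s.workers += 1

if __name__ == "__main__":
    pipeline = [Stage("compress", 120.0), Stage("encrypt", 40.0), Stage("upload", 90.0)]
    balance(pipeline)
    for s in pipeline:
        print(f"{s.name}: {s.workers} worker(s), ~{s.velocity:.0f} items/s")
```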

    Optimización multiobjetivo usando un micro algoritmo genético (Multiobjective Optimization Using a Micro Genetic Algorithm)

    Master's thesis in Artificial Intelligence, presented to the Facultad de Física e Inteligencia Artificial of the Universidad Veracruzana, Región Xalapa.

    Using Clustering Techniques to Improve the Performance of a Particle Swarm Optimizer

    In this paper, we present an extension of the heuristic called “particle swarm optimization” (PSO) that is able to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and is based on the idea of having a set of sub-swarms instead of single particles. In each sub-swarm, a PSO algorithm is executed and, at some point, the different sub-swarms exchange information. Our proposed approach is validated using several test functions taken from the evolutionary multiobjective optimization literature. Our results indicate that the approach is highly competitive with respect to algorithms representative of the state of the art in evolutionary multiobjective optimization.
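    The two ingredients named in the abstract, Pareto dominance for choosing flight leaders and sub-swarms that periodically exchange information, can be sketched roughly as follows. This is a hedged illustration rather than the paper's algorithm; the `dominates` and `nondominated` helpers, the sub-swarm sizes, and the exchange step are assumptions.

```python
import random

def dominates(a, b):
    """Pareto dominance (minimization): a is no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Toy example: three sub-swarms holding bi-objective vectors; every few
# iterations each sub-swarm's nondominated leaders are pooled and shared.
random.seed(0)
subswarms = [[(random.random(), random.random()) for _ in range(5)] for _ in range(3)]
leaders = [nondominated(s) for s in subswarms]            # per-sub-swarm leaders
shared = nondominated([p for ls in leaders for p in ls])  # information exchanged
print("shared leaders:", shared)
```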

    Evolutionary Computation Group

    In this paper, we present a genetic algorithm with a very small population and a reinitialization process (a micro genetic algorithm) for solving multiobjective optimization problems. Our approach uses three forms of elitism, including an external memory (or secondary population) to keep the nondominated solutions found along the evolutionary process. We validate our proposal using several engineering optimization problems taken from the specialized literature, and we compare our results with respect to two other algorithms (the NSGA-II and PAES) using three different metrics. Our results indicate that our approach is very efficient (computa
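    A rough sketch of the loop the abstract describes, i.e., a very small population, periodic reinitialization, and an external memory holding the nondominated solutions found so far, is given below. The toy objective function, the population size, the reinitialization period, and the mutation operator are assumptions chosen for illustration, not the authors' settings.

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(x):
    # Toy bi-objective problem; the paper uses engineering design problems.
    return (x * x, (x - 2.0) ** 2)

def micro_ga(generations=200, pop_size=4, reinit_every=25):
    archive = []  # external memory (secondary population) of nondominated solutions
    pop = [random.uniform(-4.0, 4.0) for _ in range(pop_size)]
    for gen in range(generations):
        # Variation on the micro-population (mutation only in this sketch).
        pop = [x + random.gauss(0.0, 0.2) for x in pop]
        for x in pop:
            fx = evaluate(x)
            if not any(dominates(evaluate(a), fx) for a in archive):
                archive = [a for a in archive if not dominates(fx, evaluate(a))]
                archive.append(x)
        # Reinitialization: restart the micro-population, seeding part of it
        # from the external memory (one of the forms of elitism).
        if (gen + 1) % reinit_every == 0:
            elite = random.sample(archive, min(2, len(archive)))
            pop = elite + [random.uniform(-4.0, 4.0) for _ in range(pop_size - len(elite))]
    return archive

if __name__ == "__main__":
    random.seed(0)
    front = micro_ga()
    print(f"{len(front)} nondominated solutions kept in the external memory")
```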

    Current and Future Research Trends in Evolutionary Multiobjective Optimization

    In this chapter, we present a brief analysis of the current research performed on evolutionary multiobjective optimization. After analyzing first- and second-generation multiobjective evolutionary algorithms, we address two important issues: the role of elitism in evolutionary multiobjective optimization and the way in which concepts from multiobjective optimization can be applied to constraint-handling techniques. We conclude with a discussion of some of the most promising research trends in the years to come.

    A Proposal to Use Stripes to Maintain Diversity in a Multi-Objective Particle Swarm Optimizer

    In this paper, we propose a new mechanism to maintain diversity in multi-objective optimization problems. The proposed mechanism is based on the use of stripes applied in objective function space and is independent of the search engine adopted to solve the multi-objective optimization problem. In order to validate the proposed approach, we included it in a multi-objective particle swarm optimizer. Our approach was compared with respect to two multi-objective evolutionary algorithms that are representative of the state of the art in the area. The results obtained indicate that our proposed mechanism is a viable alternative for maintaining diversity in the context of multi-objective optimization.
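    The abstract gives only the outline of the stripes mechanism, so the following sketch illustrates the general idea, partitioning objective space into stripes and keeping a limited number of representatives per stripe, rather than the paper's exact formulation; the stripe count, the projection onto the first objective, and the one-per-stripe quota are all assumptions.

```python
import random

def assign_to_stripes(points, n_stripes=10):
    """Bucket objective vectors into stripes along the first objective and keep
    one representative per stripe, spreading the retained solutions across
    objective space."""
    lo = min(p[0] for p in points)
    hi = max(p[0] for p in points)
    width = (hi - lo) / n_stripes or 1.0  # guard against a degenerate range
    stripes = {}
    for p in points:
        idx = min(int((p[0] - lo) / width), n_stripes - 1)
        stripes.setdefault(idx, p)        # first point claims the stripe
    return list(stripes.values())

random.seed(1)
front = [(x, 1.0 - x) for x in (random.random() for _ in range(50))]
print(len(assign_to_stripes(front)), "representatives kept")
```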

    EMOPSO: A Multi-Objective Particle Swarm Optimizer with Emphasis on Efficiency

    This paper presents the Efficient Multi-Objective Particle Swarm Optimizer (EMOPSO), an improved version of a multi-objective evolutionary algorithm (MOEA) previously proposed by the authors. Throughout the paper, we provide several details of the design process that led us to EMOPSO. The main issues discussed are: the mechanism to maintain a set of well-distributed nondominated solutions, the turbulence operator that avoids premature convergence, the constraint-handling scheme, and the study of parameters that led us to propose a self-adaptation mechanism. The final algorithm is able to produce reasonably good approximations of the Pareto front of problems with up to 30 decision variables while performing only 2,000 fitness function evaluations. As far as we know, this is the lowest number of evaluations reported so far for any multi-objective particle swarm optimizer. Our results are compared with respect to the NSGA-II on 12 test functions taken from the specialized literature.
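    Of the design elements listed, the turbulence operator is the simplest to illustrate: it randomly perturbs some particles so the swarm does not converge prematurely. The sketch below works on scalar positions for brevity; the perturbation distribution, the `prob` and `scale` parameters, and the clamping to bounds are assumptions rather than EMOPSO's actual operator.

```python
import random

def turbulence(positions, bounds, prob=0.1, scale=0.05):
    """Perturb a fraction of (scalar) particle positions within their bounds,
    injecting diversity so the swarm can escape premature convergence."""
    lo, hi = bounds
    span = hi - lo
    perturbed = []
    for x in positions:
        if random.random() < prob:
            x = min(hi, max(lo, x + random.gauss(0.0, scale * span)))
        perturbed.append(x)
    return perturbed

random.seed(2)
swarm = [random.uniform(0.0, 1.0) for _ in range(20)]
swarm = turbulence(swarm, bounds=(0.0, 1.0))
print(swarm[:5])
```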